A Mixture of Experts (MoE) model trained on masked language modeling, with 1.6 trillion parameters. Its architecture is similar to T5, but it replaces the dense feed-forward layer with a sparse MoE layer made up of expert MLPs and a router (a minimal sketch of this idea follows below).
Tags: Large Language Model, Transformers, English
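To illustrate the sparse layer described above, here is a minimal PyTorch sketch of a Switch-style top-1 MoE feed-forward block. It is an assumption-laden toy, not the released model: the class name `SwitchMoELayer`, the dimensions, and the plain softmax router are all hypothetical, and real implementations add capacity limits and a load-balancing loss.

```python
# Minimal sketch (assumptions, not the released model): a router sends each
# token to one expert feed-forward network instead of a single dense FFN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        # One small feed-forward network (MLP) per expert.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten to a list of tokens.
        tokens = x.reshape(-1, x.size(-1))
        gate_probs = F.softmax(self.router(tokens), dim=-1)
        top_prob, top_expert = gate_probs.max(dim=-1)  # top-1 routing
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = top_expert == i
            if mask.any():
                # Each token is processed only by its chosen expert,
                # scaled by the router's gate probability.
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)

# Usage: route a batch of token embeddings through 8 experts.
layer = SwitchMoELayer(d_model=64, d_ff=256, num_experts=8)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

Because each token activates only one expert, the parameter count grows with the number of experts while the per-token compute stays close to that of a single dense feed-forward layer, which is how such models reach trillion-parameter scale.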